CS6750 HCI Notes

Author: Taichi Nakatani

1.1 Introduction to HCI

Interaction:

  • Human
  • Computer
  • Task

We might be experts at interacting with computers, but that doesn't make us experts at designing interactions between other humans and computers.

Overview of HCI

  • Human factors engineering: Designing interactions between people and products, systems, or devices.
    • Merger of engineering and psychology
  • HCI vs UI Design: UI design is more concerned with on-screen interaction.
  • HCI vs UX Design: UX design dictates how humans interact with computers, while HCI seeks to understand that interaction. The two have a symbiotic relationship.

What is HCI:

  • Research: needfinding, prototyping, evaluation
  • Design: distributed cognition, mental models, universal design
  • research/design is symbiotic

Reference:

  • Richard Mander, Gitta Salomon, and Yin Yin Wong. 1992. A “pile” metaphor for supporting casual organization of information. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92). Association for Computing Machinery, New York, NY, USA, 627–634. https://doi.org/10.1145/142750.143055
    • How do people organize the rapid flow of information in their workspaces?
    • People organized things in 'piles' of related materials; the paper proposes mimicking this in a computer interface.

1.2 Introduction to CS6750

Learning goals:

  1. Understand common principles in HCI
  2. Understand the design life cycle
  3. Understand current applications of HCI

Learning outcome: "To design effective interactions between humans and computers"

  • Design: Applying known principles to a new problem.
  • Design: An iterative process of needfinding, prototyping, evaluating, and revising.
  • Effective interactions: usability, research, change
  • Between humans and computers: Not just interfaces, designing interactions.

Learning strategies:

  • learning by example
  • learning by doing
  • learning by reflection ("you are not your user")

1.3 Exploring HCI

New application areas:

  • Technologies: emerging technological capabilities that let us create new and interesting user interactions.
  • Domains: pre-existing areas that could be significantly disrupted by computer interfaces
  • Ideas: Theories about the way people interact with interfaces and the world around them.

Technology

  • Augmented Reality: Complements real world.
  • Ubiquitous Computing: Computing power anywhere, anytime. IoT. Wearable technology.
  • Human-robot interaction: Robot safety, societal impact, feedback/reinforcement systems
  • Mobile: Limitations in compute, but ubiquitous. Challenges in replacing computers.
  • Context-sensitive computing: Equipping UIs with historical, geographical, or other forms of contextual knowledge.

Idea

  • Gesture-based interaction: wrist-band motion
  • Pen/touch-based interaction
  • Information visualization: representing abstract data visually to help humans understand it
  • Computer-supported Cooperative Work (CSCW): Using computers to support people working together.
    • Divided by TIME and PLACE
  • Social Computing: How computers affect the way we interact and socialize
    • Recreating social norms within computational systems (chat interfaces, people using emojis to show emotion, etc.)

Domain

  • Special needs: Prosthetics, communicating data to a blind person using sound.
  • Education: Making education challenging, but not due to a bad interface. "The worst thing to do is make students busy worrying about the interface instead of the subject matter itself."
  • Healthcare: virtual reality, immersion therapy.
  • Security: Increasing the usability of security through HCI.
  • Games: good logical mapping between action & effect. Tight connection between task and interface.

2.1 Intro to Principles

Learning Goals:

  1. Focusing on the task (not tools)
  2. Role of the interface in mediating users and tasks
  3. The role of the user: Processor? Predictor? Participant?
  4. User experience at multiple levels

Tips for identifying a task

  1. Watch real users
  2. Talk to them (what are their goals and motives?)
  3. Start small
  4. Abstract up - work from small observations, abstract up to an understanding of the tasks they're trying to complete.
  5. You are not your user

Usefulness and Usability

  • Useful: The interface allows the user to achieve some task.
  • Usable: More important. By understanding the task, you can create solutions that go beyond standard interfaces (e.g. paper maps vs. navigation systems).
    • Reduce cognitive load: The total mental effort being used in working memory.

Views of the User: Processor

  • Humans as "processors": Take input in and give output out.
  • Interface must fit within human limits:
    • Think of what humans can sense, store in memory, and physically do in the world.
    • "Usability" equals an interface that is physically usable.
  • Interfaces are evaluated by quantitative experiments:
    • Numeric measurements on how quickly the user can complete a task, or react to a stimulus.
  • Less emphasis placed on this view

Views of the User: Predictor

  • Humans as "predictors": We want humans to predict what will happen in the world as a result of some action they take.
  • Interface must fit with their knowledge:
    • Help users learn what they don't know, and leverage what they already know.
  • Evaluated by qualitative studies:
    • "Ex situ" studies (in a controlled environment)
    • Task analyses, cognitive walk-throughs to understand the user's thought
  • Still focused on one user, one task.

Views of the User: Participant

  • Humans as "participants": The interface is interested in what's going on around the user (e.g. other tasks, other people they are interacting with).
  • Interface must fit with the context: Humans must be able to interact with the system in the context where they need it.
  • Evaluated by "in-situ" studies (studying interface + user within the real context of the task)

PPP Table

Views of User: Schools of Thought

  • Processor: From Behaviorist school. Systematic way to investigate behaviors in humans and other animals.
    • John B. Watson: Focus on observed behavior, not introspection.
    • Pavlov (classical conditioning, dogs), Skinner (operant conditioning, rats)
  • Predictor: From Cognitivism. We care about what the user is predicting, ie. thinking.
    • Is knowledge inborn, or through experience (Kant, Descartes)
    • Cognitive science (50s)
    • Chomsky, Carey, Minsky, Herbert Simon, etc.
  • Participant: Functionalism (Psychology), Systems (Psychology). More rooted in HCI itself.
    • Cares about environment of the user.
    • Edwin Hutchins, Lucy Suchman, etc.

Designing with Three Views

Test Case: Tesla interface screen

Processor model: Strictly observe user's behavior (e.g. timing)

  • Pros: May use existing data; enables objective comparisons (e.g. text vs. voice with respect to speed).
  • Cons: Doesn't reveal the reasons behind differences. Can't differentiate by expertise (power user vs. novice). Helps optimize but not redesign.

Predictor model:

  • Pros:
    • More complete picture of interaction: Ask users for input (e.g. interviews / focus groups, showing them prototypes and seeing what they think). Reveals why users use different interfaces at different times (voice vs. text).
    • Targets different levels of expertise: Ranging from power users to novices.
  • Cons:
    • Analysis may be expensive (going through transcripts, analysis takes time).
    • Analysis is subject to biases: Analyst can have bias in interpreting interview data.
    • Ignores broader interaction context: Only focuses on interface, not the real authentic environment in which they are using that interface.

Participant model:

  • Pros:
    • Evaluates interaction in context: Notice how users may get distracted.
    • Captures authentic user attention
  • Cons:
    • Expensive to perform and analyze
    • Requires real, functional interfaces (not prototypes). Hard to use this model when getting started with a new design task.
    • Subject to more uncontrollable variables.

Takeaway: We'll use all of these models at different times and in different contexts

We might start with a participant model where we just ride around with users watching what they do.

Based on that, we might observe that they spend a lot of time fumbling around to return to the same few locations.

So, then we might redesign an interface to include some kind of ‘bookmarking’ system, and present it to users in interviews.

There, they might tell us that they like the design, but further note that they don’t need a long list of bookmarks -- they really only need work and home.

Based on that, we might then design an interface where a simple swipe takes them to work or home. Then, we might test that with users to see how much more efficiently they’re able to start navigation when these kinds of shortcuts are provided.

The results of each design phase inform the next, and different phases call for different types of evaluation, which echo different models of the user.

Good Design, Bad Design

  • Good Design: A GPS system that warns you 20 seconds before you need to make a turn
  • Bad Design: A GPS system that warns you 2 seconds before you need to make a turn
  1. If you view the user just as a sensory processor, you might think that we need only alert the user a second before they need to turn: after all, human reaction time is less than a second.
  2. If you view the user as a predictor, you understand that they need time to slow the car down and make the turn, so they need a few more seconds to actually execute the action of turning after being alerted about the upcoming turn.
  3. And if you view the user as a participant, you understand that this is happening while they’re going 50 miles an hour down the road with a screaming toddler in the back seat trying to merge with a driver on a cell phone and another eating a cheeseburger.

Reflections: Views of the User

  • Bad processor model: Time tracking by manual entry; it doesn't take a realistic view of the human's role in the system.
  • Good predictor model: Ed UI (interface to show upcoming lessons, playback bar), takes cognitive load off user so they can focus on learning.
  • Good participant model: Sleep tracking apps. It monitors my sleep cycles, rings at the optimal time, and tracks my sleep patterns to make recommendations.

User Experience - Sans Design

  • By my definition, user experience design is attempting to create systems that dictate how the user will experience them.
  • User experience on its own, however, is a phenomenon that emerges out of interactions between users and tasks via interfaces.
  • It goes beyond the simple interaction of the user with the interface to accomplish the task and touches on the emotional, personal, or more experiential elements of the relationship.
  • We can build this idea as an expanding understanding of the scope of what defines the ‘user experience’.

Design Challenge: Morgan on the Street

So, keeping in mind everything we’ve talked about, let’s design something for Morgan. Morgan walks to work. She likes to listen to audiobooks, mostly non-fiction. But she doesn’t just want to listen, she wants to be able to take notes and leave bookmarks as well. What would designing for her look like from the perspectives of viewing her as a processor, a predictor, and a participant?

  1. Processor: What is communicated, when and how.
    • Look at what information is communicated to Morgan, when and how.
  2. Predictor: How the interface meshes with Morgan's immediate needs.
    • Look at how the interface meshes with Morgan's needs with regard to this task: How easy it is to access, and how easy the commands are to perform.
  3. Participant: How the interface interacts with Morgan's life as a whole
    • Look at broader interactions between this interface and Morgan's other tasks and social activities. We might look at how increased access to books changes her life in other ways.

Conclusion

  1. Interfaces mediate between users and tasks.
  2. Usability: Efficiency and user satisfaction.
  3. 3 views of the user (processor, predictor, participant)
  4. UX at group / societal levels.

2.2 Feedback Cycles

Feedback Cycles are Fundamental

Gulf of Execution

Gulf of execution: How hard is it to do in the interface what is necessary to accomplish those goals? What’s the difference between what the user thinks they should have to do, and what they actually have to do?

3 Components:

  1. Identify intentions - User must identify what their goal is in the context of the system.
  2. Identify Actions - User must be able to identify the actions necessary to accomplish their goals.
  3. Execute in Interface - User must be able to actually interface with the system to carry out the actions.

Example: Microwave

  1. identify intent - "microwave for one minute"
  2. identify action - "press heat, 1, 0, 0, start"
  3. execute - actually act on the sequence

5 Tips to Reduce Gulf of Execution

  1. Make functions discoverable - Ideally, the functions of the interface would be discoverable, meaning that they can find them, clearly labeled, within the interface.
  2. Let the user mess around - You want your user to poke around and discover things, so make them feel safe doing so. Don’t include any actions that can’t be undone. Avoid any buttons that can irreversibly ruin their document or setup. That way, the user will feel safe discovering things in your interface.
  3. Be consistent with other tools - We all want to try new things and innovate, but we can bridge gulfs of execution nicely by adopting the same standards many other tools use. Use Ctrl+C for Copy, Ctrl+V for paste. Using a diskette icon for ‘save’, even though no one has used floppy disks in years. This makes it easy for users to figure out what to do in your interface.
  4. Know your user - The gulf of execution has a number of components: identifying your intentions, identifying the actions to take, and taking the actions. For novice users, identifying their intentions and actions are most valuable, so making commands discoverable through things like menus is preferable. For experts, though, actually doing the action is more valuable. That’s why many experts prefer the command-line: although it lacks many usability principles targeted at novices, it’s very efficient.
  5. Feedforward - We’ve talked about feedback, which is a response to something the user did. Feedforward is more like feedback on what the user might want to do. It helps the user predict what the result of an action will be. For example, when you pull down on the Facebook newsfeed on your phone, it starts to show the refresh icon -- if you don’t finish pulling down, it doesn’t refresh. That’s feedforward: information on what will happen if you keep doing what you’re doing.
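The feedforward idea in tip 5 can be sketched as a tiny state function for a pull-to-refresh gesture. This is purely illustrative: the 80-pixel threshold and the state names are assumptions, not any real toolkit's API.

```python
# Minimal sketch of feedforward in a pull-to-refresh gesture.
# REFRESH_THRESHOLD is a hypothetical value, not a platform constant.
REFRESH_THRESHOLD = 80  # pixels the user must pull before a refresh fires

def pull_state(pull_distance):
    """Return what the UI should show while the user is mid-gesture."""
    if pull_distance <= 0:
        return "idle"
    if pull_distance < REFRESH_THRESHOLD:
        # Feedforward: show the refresh icon filling in, signalling what
        # WILL happen if the user keeps pulling -- but don't refresh yet.
        pct = int(100 * pull_distance / REFRESH_THRESHOLD)
        return f"refresh icon at {pct}%"
    return "refreshing"

print(pull_state(40))   # → refresh icon at 50%
print(pull_state(100))  # → refreshing
```

The key design point: the intermediate "icon at N%" states communicate the consequence of the in-progress action before it is committed, which is exactly what distinguishes feedforward from feedback.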

Gulf of Evaluation

Gulf of evaluation: How does the user become aware that their action succeeded?

3 Components:

  1. Interface output - What is actually displayed/communicated to the user.
  2. Interpretation - User needs to interpret the output to find out what it means for the system.
  3. Evaluation - User can evaluate whether the desired change occurred.

Example: Thermostat

  • Interface output: The heat turns on, but nothing communicates to the user that the heat is on. Fix: display "heat" in the UI.
  • The heater might also shut off for other reasons without indication; this is a large gulf of evaluation.
  • The user has to do a lot of work to evaluate whether their actions did anything.

5 Tips to Reduce Gulf of Evaluation

  1. Give feedback constantly - Don’t automatically wait for whatever the user did to be processed in the system before giving feedback. Give them feedback that input was received. Give them feedback on what input was received. Help the user understand where the system is in executing their action by giving feedback at every step of the process.
  2. Give feedback immediately - Let the user know they have been heard even when you’re not ready to give them a full response. If they tap an icon to open an app, there should be immediate feedback on that tap. That way, even if the app takes a while to open, the user knows that the phone recognized their input. That’s why icons briefly grey out when you tap them on your phone.
  3. Match the feedback to the action - Subtle actions should have subtle feedback, significant actions should have significant feedback.
  4. Vary your feedback - It’s often tempting to view our designs as existing solely on the screen, and so we want to give feedback on the screen. But the screen is where the interaction is taking place, so visual feedback can get in the way. Think about how auditory or haptic feedback can be used instead of relying just on visual feedback.
  5. Leverage direct manipulation - whenever possible, let the user feel like they’re directly manipulating things in the system. Things like dragging stuff around or pulling something larger or smaller are very intuitive actions because they feel like you’re interacting directly with the content.
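Tip 2 (give feedback immediately) usually amounts to acknowledging the input before the slow work finishes, then reporting completion later. A minimal sketch, assuming hypothetical `on_ack`/`on_done` callbacks rather than any real UI toolkit:

```python
import threading
import time

def handle_tap(process, on_ack, on_done):
    """Acknowledge the user's tap immediately, then run the slow
    action in the background and report when it finishes."""
    on_ack()  # immediate feedback: the input was received
    def worker():
        result = process()  # the slow part (e.g., launching an app)
        on_done(result)     # final feedback: the action completed
    t = threading.Thread(target=worker)
    t.start()
    return t

# Illustrative usage: the "app launch" takes a while, but the user
# sees the icon grey out (the ack) right away.
events = []
t = handle_tap(
    process=lambda: (time.sleep(0.1), "app opened")[1],
    on_ack=lambda: events.append("icon greyed out"),
    on_done=lambda r: events.append(r),
)
t.join()
print(events)  # → ['icon greyed out', 'app opened']
```

This mirrors the icon-greying behavior described above: even if opening the app takes a while, the user knows immediately that the tap registered.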

Norman's Feedback Cycle Stages

7 questions to bridge the gulf of execution / evaluation:

  1. How easily can one determine the function of the device? This relates to the user’s goal: how easily can the user determine that the interface is capable of accomplishing their goal?
  2. How easily can one tell what actions are possible? This is important for the user to be able to construct their plan.
  3. How easily can one determine the mapping from intention to physical movement?
  4. How easily can one actually perform that physical movement?
  5. How easily can one tell what state the system is in?
  6. How easily can one tell if the system is in the desired state?
  7. How easily can one determine the mapping from system state to interpretation?

Norman also further articulates this by breaking the process into phases that span both execution and evaluation.

  1. The raw action and perception is referred to as visceral: this is the physical act of performing the plan, or perceiving the outcome.
  2. The behavioral area is where we think about what steps to actually take or what we’re seeing from the interface.
  3. The reflective area is where we put it in the context of our goal: either translating a goal into a plan, or comparing the interpreted results to the original goal.

Tying it to KBAI:

Feedback Cycles - David's Car

  • Designing a system the way a user expects it to be designed helps them across the gulf of execution (e.g. the location of the ignition).
  • Detecting engine on or electrical only - Throws alert telling you how to start the car. The output presented is easy to interpret, and the context in which it is given helps us evaluate pretty quickly.
  • So we have some trouble here with the gulf of execution, but the gulf of evaluation is still pretty short.
  • Areas of improvement:
    1. We know that the screen can show an alert that the brake needs to be depressed to turn the car on. Why not show that immediately after the car door opens when the car is off?
    2. Ignition sound - Differ the sound based on whether the car turned on, rather than using same sound.

Design Challenge - Credit Card Readers

What is the problem with the framing of the problem?

The right answer is: We shouldn't be thinking just about swiping or inserting a card, we should be thinking about the general purchasing process.

Conclusion

  1. We discussed feedback cycles’ incredible ubiquity in other fields and discussions.
  2. We talked about gulfs of execution, the distance between knowing what they want to accomplish and actually executing the steps to accomplish it.
  3. We talked about gulfs of evaluation, the distance between making some change in a system and evaluating whether or not the goal was accomplished.
  4. We introduced the seven questions we need to ask to bridge those gulfs.

3.1 Intro to Methods

Lesson Goals

  • Students will understand the notion of user-centered design, especially as it contrasts with other design philosophies.
  • Students will understand the fundamental principles and approaches to user-centered design.
  • Students will understand the design life cycle.
  • Students will understand the goal of the unit.

Lesson Outcomes

  • Students will be able to describe the phases of the design life cycle.
  • Students will be able to describe the tenets of user-centered design.
  • Students will be able to identify qualitative vs. quantitative data and describe the value of each.

Assessments

  • Students will reflect on the application of the lesson’s concepts to their chosen area of HCI.
  • Students will engage in a short design task based on the lesson’s concepts.
  • Students will complete a short answer assignment in which they critique a provided interface from the perspective of the lesson’s concepts.
  • Students will complete a short answer assignment in which they select an interface to critique from the perspective of the lesson’s concepts.
  • Students will complete a short answer assignment in which they design a revision of one of the critiqued interfaces from the perspective of the lesson’s concepts.

User-centered Design

"In order to design interactions that are better than existing designs, it is important to take into consideration the user’s needs at every stage of the design process."

  • Definition: Design that considers the needs of the user throughout the entire design process.
  • What it does:
    1. Examine the user’s needs in depth, both by observing them and by asking them direct questions.
    2. After we start designing, we need to present our design alternatives and prototypes to the user to get feedback.
    3. When we near a design, we need to evaluate the quality of the design with real users.
  • Pitfalls of bad design:
    1. Design to meet functional requirements instead of real needs
    2. False assumption that designers knows the needs of user

Principles of User-Centered Design

ISO - Six principles to follow when pursuing user-centered design

  1. The design is based upon an explicit understanding of users, tasks and environments.
    • This means that we must gather information about the users, the tasks they perform, and where they perform those tasks, and leverage that knowledge throughout the design process.
  2. Users are involved throughout design and development.
    • Involvement can take on many forms, from regularly participating in interviews and surveys about designs and prototypes to actually working on the design team alongside the designers.
  3. The design is driven and refined by user-centered evaluation.
    • We absolutely must have real users evaluating the prototypes and interfaces we assemble.
  4. The process is iterative.
    • No tool is developed once, released, and then abandoned. Designs undergo constant iteration and improvement, even after being released.
  5. The design addresses the whole user experience.
    • Many designers are tempted to delineate a certain portion of the experience as their primary interest, but we must address the entire user experience.
  6. The design team includes multidisciplinary skills and perspectives.
    • Good teams for pursuing user-centered design include people with a number of different backgrounds, including psychologists, designers, computer scientists, domain experts, and more.

Stakeholders

"User-centered design isn’t just about catering to the user in the middle, but also in looking at the impact of our design on all the affected stakeholders."

  1. User - the person who uses the interface that we create
  2. Secondary - Secondary stakeholders don't directly use the interface, but might interact with its output.
  3. Tertiary - People who never interact with the tool or its output, but are nonetheless impacted by it.

Examples:

  • Gradebook tool: User = Teacher (uses gradebook), Secondary = Parents (receives gradebook), Tertiary = Students (affected by grade)
  • Thought: How does parents having more consistent access to student grade information affect students? Might foster more involvement, but also could lead to helicopter parenting.

Reference: “The Inmates Are Running the Asylum” by Alan Cooper

  • Compares technology to a dancing bear at a circus. He notes that people marvel at a dancing bear not because it’s good at dancing, but because it dances at all.
  • Engineers shouldn't be UI designers

The Design Life Cycle

  1. Needfinding - Gather a comprehensive understanding of the task that users are trying to perform. Includes "who is the user", "what is the context of the task", "why are they doing the task".
  2. Design Alternatives - Very early ideas on different ways to approach the task. Emphasis on multiple designs to avoid fixating on one idea.
  3. Prototyping - Take ideas with most potential, build them into prototypes that can be put in front of the user.
  4. User Evaluation - Take ideas and put them in front of users. Get feedback... and go back to step 1

Design Life Cycles meet Feedback Cycles

"In HCI, we’re designing interfaces to accomplish goals, and then based on the output of our evaluations with those interfaces, we judge whether or not the goals of the interface were accomplished. Then, we repeat and continue."

"In many ways, we’re doing the same things that our users are doing: trying to understand how to accomplish a task in an interface. "

Qualitative vs Quantitative Data

Quantitative Data: observations described or summarized numerically; anything numeric counts.

  • Quantitative data supports formal tests, comparisons, and conclusions.
  • Only captures a narrow view of what we might be interested in examining.
  • Strong for measuring a small class of data points.

Qualitative Data: observations described or summarized non-numerically.

  • Includes natural language (surveys, natural response, reports)
  • Much broader and more general picture of what we’re examining.
  • More prone to biases.
  • Harder to generate formal conclusions based on qualitative data.
  • In some circumstances, we can convert qualitative data into quantitative data. (Convert free response to quant data by coding, turn multiple choice into nominal data)
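The coding step mentioned above can be sketched as keyword-based coding of free-text responses into nominal counts. The codebook, keywords, and responses here are all hypothetical; real qualitative coding is done by human coders against an agreed codebook, and this only illustrates the qualitative-to-quantitative conversion.

```python
from collections import Counter

# Hypothetical codebook mapping keywords in free-text survey
# responses to nominal codes.
CODEBOOK = {
    "confusing": "usability_issue",
    "slow": "performance_issue",
    "love": "positive",
    "like": "positive",
}

def code_response(text):
    """Assign every matching code to one free-text response."""
    text = text.lower()
    return {code for kw, code in CODEBOOK.items() if kw in text}

responses = [
    "I love the new layout",
    "The menu is confusing and slow",
    "I like it, though search felt slow",
]

# Tally codes across all responses: qualitative text becomes
# quantitative (nominal) data we can count and compare.
counts = Counter(code for r in responses for code in code_response(r))
print(counts)  # positive: 2, performance_issue: 2, usability_issue: 1
```

Once coded, the counts support the formal comparisons that raw free-text responses cannot.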

Uses:

  • Needfinding - focus on qualitative descriptions of tasks or experiences
  • Prototyping - start focusing on quantitative, numeric improvements.
  • Using both is called mixed method.


3.2 Ethics and Human Research

Origin of Institutional Review Board (IRB)

Infamous studies:

  1. Tuskegee Syphilis Study, where treatment was withheld from rural African-American men with syphilis so researchers could study the disease's progression.
  2. Milgram obedience experiment, where participants were tricked into thinking they had administered lethal shocks to other participants to see how obedient they would be.
  3. Stanford Prison Experiment, where participants were psychologically abused to test their limits.

Response:

  1. National Research Act of 1974 - led to the creation of Institutional Review Boards to oversee research at universities
  2. Belmont Report - summarizes basic ethical principles that research must follow.
    • benefits to society outweigh the risks to the subjects.
    • subjects be selected fairly (direct response to the Tuskegee syphilis study)
    • demanded rigorous informed consent procedures
    • positive results of research outweigh the negatives and that participant rights are always preserved

The Value of Research Ethics

  • The IRB’s main task is to make sure the potential benefits of a study outweigh the potential risks to subjects.
  • Ensure data we gather is useful.
  • Issues of coercion: When participants feel coerced to participate in research, the data they actually supply may be skewed by that negative perception, which impacts our data.
  • Issues of bias: We might demand too much from participants, or ask questions that are known to affect our results.

IRB Protocols

See document link below for explanation of:

  • Basics
  • Human Subject Interaction
  • Consent Procedures

https://docs.google.com/document/d/1e3BbJMHNABxvss1i1bXroJY_CzzqQrEe7vud5s_yfVE/edit#heading=h.weyas98hw358

Research Ethics and Industry

  • Industry constantly runs experiments on users.
  • The Facebook experiment, "Experimental evidence of massive-scale emotional contagion through social networks", would never have passed an IRB: it manipulated users' moods for experimentation and did not obtain informed consent.
  • Was it ethical?
    • Yes - Facebook did have an internal IRB, and the terms of service covered it.
    • No - Users couldn't opt out and weren't aware of the experiment.

Paper spotlight - Evolving the IRB: Building Robust Review for Industry Research

  • Industry currently only governs itself; the paper proposes new review standards for industry research.
  • One proposal: Facebook could refer studies to external reviewers and/or an external IRB.

3.3 Needfinding and Requirements Gathering

Data Inventory

Before we start our needfinding exercises, we also want to enter with some understanding of what data we want to gather.

  1. Who are the users? What are their ages, genders, levels of expertise?
  2. Where are the users? What is the environment?
  3. What is the context of the task? What else is competing for users’ attention?
  4. What are their goals? What are they trying to accomplish?
  5. Right now, what do they need? What are the physical objects? What information do they need? What collaborators do they need?
  6. What are their tasks? What are they doing physically, cognitively, socially?
  7. What are the subtasks? How do they accomplish those tasks?

Problem Space

In order to do some real needfinding, the first thing we need to do is identify the problem space.

  1. Where is the task occurring?
  2. What else is going on?
  3. What are the user’s explicit and implicit needs?
  • As we’re going about needfinding, we want to make sure we’re taking the broad approach: understanding the entire problem space in which we’re interested, not just focusing narrowly on the user’s interactions with an interface.
  • So, in our exploration of methods for needfinding, we’re going to start with the most authentic types of general observation, then move through progressively more targeted types of needfinding.

User Types

Significance: We want to understand who we’re designing for.

  • Account for all types of user who we're designing the product for.

Audiobook example:

  1. For kids, and/or adults
  2. Experts at exercising, and/or novices
  3. Experts at listening to audiobooks, and/or novices?

"Differentiate whether I’m designing for business people who want to be able to exercise while reading, or exercisers who want something else to do while exercising."

  • identify these different types of users, and perform needfinding exercises on all of them.
  • Reference: Doing Cultural Studies by Hugh Mackay and Linda Janes.

Avoiding Bias in Needfinding

  1. Confirmation bias. Confirmation bias is the phenomenon where we see what we want to see. We enter with some preconceived ideas of what we’ll see, and we only notice those things that confirm our prior beliefs.
    • Try to avoid this by specifically looking for signs that you’re wrong, by testing your beliefs empirically, and by involving multiple individuals in needfinding.
  2. Observer bias. When we're interacting directly with users, we may subconsciously bias them, e.g. being more helpful to those using our interface as intended than to those using a competitor's design.
    • Try to avoid this by separating experimenters with motives from the participants, by heavily scripting interactions with users, and by having someone else review your interview scripts or surveys for leading questions.
  3. Social desirability bias. If you’re testing an interface and the participants know you were the designer, they’ll want to say nice things about it to make you happy.
    • Try to avoid this by conducting more natural observations, recording objective data, and staying out of participants' way.
  4. Voluntary response bias. People with stronger opinions are more likely to respond to optional surveys. Risk oversampling extreme views.
    • Try to avoid this by limiting how much of the survey content is shown to users before they begin the survey, and by confirming any conclusions with other methods.
  5. Recall bias. Participants forget what they did or how they felt, which leads to misleading data.
    • Try to avoid this by studying tasks in context, having users think aloud during activities, or conducting interviews during the activity itself.

Naturalistic Observation

Definition: Fly on the wall approach. Note down what people are doing, and let that guide the design.

  • How: Note specific observations, then generalize to abstract tasks. This avoids confirmation bias. Think about what they're doing, and how it would affect how they'd want to interact with the design.
  • Cons: Ethically constrained; can't use personally identifiable information. Can't know what people are thinking.

5 Tips for Naturalistic Observation

  1. Take notes. Don’t just sit around watching for a while; be prepared to gather targeted information and observations about what you see.
  2. Start specific, then abstract. Write down the individual little actions you see people doing before trying to interpret or summarize them. If you jump to summarizing too soon, you risk tunnel vision.
  3. Spread out your sessions. Rather than sitting somewhere for two hours one day and moving on, try to observe in shorter 10-15 minute sessions several times. You may find interestingly different information, and your growing understanding and reflection on past exercises will help your future sessions.
  4. Find a partner. Observe together with someone else. Take your own notes, then compare them later so you can see if y’all interpreted the same scenarios or actions the same way.
  5. Look for questions. Naturalistic observation should inform the questions you decide to ask participants in more targeted needfinding exercises. You don’t need to have all the answers based on observation alone: what you need is questions to investigate further.

Participant Observation

Definition: Be a participant in your own study.

  • Warning: Be sure not to over-index on your personal observations (you're not your user).
  • Use these experiences to inform what you ask users going forward.

Hacks and Workarounds

Significance: Look at hacks users employ.

  • How do they use the UI in unintended ways?
  • How do they break out of an interface to accomplish a task that could be accomplished within the interface (e.g. post-it notes in a desk setting; they're still useful)?
  • Don't assume you understand why; ask them.
  • Uncover errors: users employ hacks to get around errors, which is a good sign the error should be fixed.

Errors:

  • Can use them to understand more about user's mental model
  • Errors vs. mistakes: an error implies there's nothing wrong with the user's mental model of how the UI works; the problem is they can easily forget the current state.
    • Mistakes: the mental model itself is weak, making the user more prone to mistakes (e.g. navigating on a Mac when accustomed to a PC).

Apprenticeship and Ethnography

Significance: Use ethnography (living close to users you're studying) to understand domain knowledge necessary to design new interface / improve the user task.

  • Why: Sometimes, no amount of observation suffices to get full understanding of the way the task works.

Interviews and Focus Groups

  • Interviews: 1:1, Focus Groups: Group conversation.
  • Focus groups run the risk of "overly convergent thinking" (all agreeing with each other).

5 tips for better interviews

  1. Focus on the six W’s in writing questions: who, what, where, when, why, and how. Try to avoid questions that lend themselves to one-word or yes-or-no answers: those are better gathered via surveys. Use your interview questions to ask open-ended, semi-structured questions.
  2. Be aware of bias: Look at how you’re phrasing your questions and interactions and make sure you’re not predisposing the participant to certain views. If you only smile when they say what you want them to say, for example, you risk biasing them to agree with you.
  3. Listen: Many novice interviewers get caught up in having a conversation with the participant rather than just gathering data from the participant. Make sure the participant is doing the vast majority of the talking, and don’t reveal anything that might predispose them to agree with you.
  4. Organize the interview: Make sure to have an introduction phase, some lighter questions to start to build trust, and a summary at the end so the user understands the purpose of the questions. Be ready to push the interview forward or pull it back on track.
  5. Practice!: Practice your questions on your friends, family, or research partners in advance. Rehearse the entire interview. Gathering subjects is tough, so when you actually have them, make sure you’re ready to get the most out of them.

Examples of good / bad interview Qs

  • Bad: "Do you exercise?" Good: "How often do you exercise?" The latter is more open-ended.
  • Bad: "Do you exercise for A or B?" Good: "Why do you exercise?" Don't present dichotomies or yes-or-no questions.
  • Good: "What, if anything, do you listen to while exercising?"
  • Bad: "What smartphone do you use to listen to something while exercising?" Good: "What device do you use to listen to something while exercising?" The former assumes the participant uses a smartphone.
  • Bad: "We're developing an app for listening to audiobooks while exercising... are you interested?" Good: "Would you be interested in an app for audiobooks while exercising?" Revealing that you're the developer introduces social desirability bias: no one wants to say they're not interested.

Think-Aloud

Definition: Ask users to talk about their perceptions of the task in the context of the task (while they're doing it).

  • Pros: Can capture thoughts the user would forget if asked afterwards.
  • Cons: Changes the user's viewpoint while doing the task; they might approach it more deliberately than they would in real life.
    • Workaround: "Post-event protocol" - wait to get the user's thoughts until immediately after the activity.

Surveys

5 Tips of Good Surveys:

  1. Less is more. The biggest mistake novice survey designers make is to ask way too much. That affects the response rate and reliability of your data. Ask the minimum number of questions necessary to get the data that you need, and only ask questions you know you’ll use.
  2. Be aware of bias. Look at how you’re phrasing the questions: are there positive or negative connotations? Are participants implicitly pressured to answer one way or the other?
  3. Tie them to the inventory. Make sure every question on your survey connects to some of the data that you want to gather. Start with the goals for the survey and write the questions from there.
  4. Test it out! Before sending it to real participants, have your coworkers or colleagues test out your survey. Pretend they’re real users, and see if you would get the data you need from their responses.
  5. Iterate! Survey design is like interface design. Test out your survey, see what works and what doesn’t, and revise it accordingly. Give participants a chance to give feedback on the survey itself so that you can improve it for future iterations.

Writing Good Survey Questions

  1. Be Clear
    • Make sure the user actually understands what we’re asking about. We want them to have a clear foundation for answering the question.
    • If we’re using a numeric scale, we want to provide labels that explain what the scale means (e.g. "1 - Highly Dissatisfied")
    • If we’re providing ranges, we want to avoid overlapping ranges so the user isn’t confused about what to select.
    • Timebox the question (e.g. "In the past seven days, how many times have you exercised?").
  2. Be Concise - Use plain language
  3. Be Specific
    • Break questions down into smaller, specific questions that get at a big idea.
    • Avoid "double-barreled" questions, i.e. asking about two things at once (e.g. "How satisfied are you with the speed AND availability of your mobile connection?").
    • Avoid questions where the user could have conflicting ideas at the same time: if they could feel differently about different parts, break them into multiple short questions ("How satisfied were you with your food?" vs. "How satisfied were you with the temperature of your food?").
  4. Be Expressive - Allow the user to be expressive.
    • Emphasize in the question prompt that they’ll be providing an opinion; this makes the user more comfortable giving their thoughts ("Is your subscription price too high?" vs. "Do you feel your subscription price is too high, too low, or just right?").
    • When providing opinion ranges like ‘agree’ vs. ‘disagree’, always provide an odd number of options so that the user can respond neutrally, and at least 5 options so users feel more comfortable differentiating their level of agreement.
    • When asking a multiple-choice question, if there’s a chance a user could have more than one thought, let them choose more than one (checkbox)
    • Avoid binary questions
  5. Be Unbiased
    • Be careful with closed-ended questions: providing an 'Other' option limits bias towards your pre-selected options.
    • Avoid leading questions (e.g. "Did our brand-new AI-based interface generate better recommendations?").
    • Avoid loaded questions (e.g. "How much time have you wasted on social media?").
  6. Be Usable - Use HCI principles in designing the survey itself.
    • Provide a progress bar so that the user can evaluate how far they are into the survey.
    • Make the pages approximately consistent lengths so that the user has an accurate gauge for what it means to be on page 3 of 5 or something similar.
    • Order your questions logically: group questions about demographics, questions about prior experience, questions about future desires, etc. such that they follow a natural flow.
    • Alert users when questions are unanswered, but don’t require them to be answered: some users will feel uncomfortable answering some questions, so it’s good to leave them unrequired, but you also want to avoid users unknowingly skipping. So, tell them if they’ve skipped, but don’t force them to answer.
    • And finally, preview the survey yourself. Your users might not tell you if there are errors, so make sure to take it yourself.
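As a sketch, the option-list rules above (an odd number of choices so a neutral midpoint exists, at least five choices, and an 'Other' escape for closed-ended questions) could be checked automatically. `check_options` is a hypothetical helper, not anything from the lectures:

```python
def check_options(options, closed_ended=True):
    """Flag survey answer-option lists that violate the guidelines:
    an odd number of scale choices (so a neutral midpoint exists),
    at least five (so users can differentiate their agreement), and
    an 'Other' escape hatch for closed-ended questions."""
    problems = []
    # 'Other' is an escape hatch, not part of the opinion scale itself.
    scale = [o for o in options if o.lower() != "other"]
    if len(scale) % 2 == 0:
        problems.append("even number of options: no neutral midpoint")
    if len(scale) < 5:
        problems.append("fewer than 5 options: hard to differentiate")
    if closed_ended and not any(o.lower() == "other" for o in options):
        problems.append("closed-ended question without an 'Other' option")
    return problems

# A 5-point Likert scale (an opinion scale, so no 'Other' needed) passes:
likert = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
print(check_options(likert, closed_ended=False))  # []
```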

Exercise: Bad Survey

Other Data Gathering Methods

  1. Existing UI evaluation - Critique interfaces that already exist using some of the evaluation methods we cover in the Evaluation lesson.
    • E.g. if you wanted to design a new system for ordering take-out food, you might evaluate the interfaces of calling in an order, ordering via mobile phone, or ordering via a web site
  2. Product Reviews - See what people already like/dislike about existing products.
  3. Data Logs - Get logs of user interaction that have already been generated
    • For example, say you wanted to build a browser that’s better at anticipating what the user will want to open next. You could grab data logs and look for trends both within and across users
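A toy sketch of that data-log idea (the log format and page names are made up for illustration): count page-to-page transitions across all users' logs, then suggest the most frequent follower of the current page.

```python
from collections import Counter

def build_predictor(logs):
    """Count page-to-page transitions across all users' logs and
    return a function suggesting the most common next page.
    `logs` maps each user to their ordered list of visited pages."""
    transitions = Counter()
    for pages in logs.values():
        for current, following in zip(pages, pages[1:]):
            transitions[(current, following)] += 1

    def predict(page):
        # Most frequent page observed immediately after `page`, if any.
        followers = {b: n for (a, b), n in transitions.items() if a == page}
        return max(followers, key=followers.get) if followers else None

    return predict

# Usage: suggest what to preload after the user opens "mail".
predict = build_predictor({
    "user1": ["mail", "news", "mail", "news"],
    "user2": ["mail", "news", "calendar"],
})
print(predict("mail"))  # news
```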

Exercise: Needfinding Pros / Cons

Design Challenge: Needfinding for Book Reading

  • Finding participants: Maybe go to library, find participants.
  • Deciding the users: Do we want to narrow down to subset, or all book readers?

Iterative Needfinding

  • Observations inform interviews and surveys (why do exercisers only use one earbud?).
  • Evaluation also feeds into needfinding (what worked / what didn't). What follow-up data do we need to improve the design?

Revisiting the Inventory

  • During needfinding, you'll gather a lot of data.
  • Pay attention to conflicting data. Are these cases where you, the designer, understand elements of the task that users don't, or has your expertise simply not developed to the point of understanding the task?
  • Once you go through data gathering process, go back to the "Data Inventory" tenets to make sure you've answered those questions.

Representing the Need

  1. Step-by-step task outline
  2. Hierarchical network - develop task outline into hierarchy, more complex than DAGs.
  3. Structural diagram - Augment with diagram of structural relationships amongst elements in the system and illustrate how they interact.
  4. Flowchart - Introduce decision-making points, or points of interruptions.
  • Notice how these representations are very similar to the outcomes of the task analyses we talk about in the principles unit of our conversations.
  • We can similarly use the data gathered from here to summarize a more comprehensive task analysis that will be useful in designing and prototyping our designs.

Defining the Requirements

Definition: Needs that our final interface must meet.

  • Specific and evaluable.
  • Can include components outside the user's tasks.

User data requirements:

  1. Functionality: what the interface can actually do.
  2. Usability: how certain user interactions must work.
  3. Learnability: how fast the user can start to use the interface.
  4. Accessibility: who can use the interface.

External Requirements:

  1. Compatibility: what devices the interface can run on.
  2. Compliance: how the interface protects user privacy.
  3. Cost: how much the final tool will cost.

2.3 Direct Manipulation

Definition: Direct manipulation is the principle that the user should feel as much as possible like they’re directly controlling the object of their task.

Invisible Interface: When the interface actually disappears. The user spends no time thinking about how to engage with the interface; all their time is dedicated to thinking about the task they're performing.

  • e.g. Desktop interface (terminal vs. Finder); the latter needs much less prior knowledge than the terminal.

Paper Spotlight: "Direct Manipulation Interfaces"

Hutchins, Edwin & Hollan, James & Norman, Donald. (1985). Direct Manipulation Interfaces. Human-computer Interaction. 1. 311-338. 10.1207/s15327051hci0104_2. https://www.lri.fr/~mbl/ENS/FONDIHM/2013/papers/Hutchins-HCI-85.pdf

Significance:

  • Hutchins authored the foundational paper for distributed cognition, and Norman created one of the most accepted sets of design principles.
  • Two aspects of directness, Distance and Direct Engagement

Distance: Semantic vs Articulatory Distance

Definition: distance between the user’s goals and the system itself. Encompasses gulf of execution/evaluation.

"...the feeling of directness is inversely proportional to the amount of cognitive effort it takes to manipulate and evaluate a system."

  1. Semantic distance - Difference between the user’s goals and their expression in the system, i.e. how hard it is to know what to do.
    • Captures the "identify actions" and "identify intentions" aspects of the gulf of execution.
  2. Articulatory distance - Distance between that expression and its execution, i.e. how hard it is to actually do what you know to do.
    • Captures the "execute actions" phase of the gulf of execution.

"The user starts with some goals, translates them into their form of expression in the interface, and executes that expression. The system then returns some output in some form of expression, which is translated by the user into their understanding of the new state of the system."

Direct Engagement

Definition: Providing the user the feeling that they are directly controlling the objects.

"The systems that best exemplify direct manipulation all give the qualitative feeling that one is directly engaged with control of the objects--not with the programs, not with the computer, but with the semantic objects of our goals and intentions."

Examples:

  1. If we’re moving files, we should be physically moving the representation of the files.
  2. If we’re playing a game, we should be directly controlling our characters.
  3. If we’re navigating channels, we should be specifically selecting clear representations of the channels we want.

Exploring HCI: Direct Manipulation & VR

  • VR is a step towards direct engagement through gesture interfaces.
  • But feedback is often lacking; how do we best give the right feedback?

Exercise: Direct Manipulation

Apple touchpad actions, which are direct engagements:

  1. Pressing down to click - yes
  2. Pressing two fingers down to right-click - no
  3. Dragging two fingers up and down to scroll - yes
  4. Double-tap to zoom in and out - no
  5. Pinching to zoom in and out - yes

Making Indirect Manipulation Direct

Significance: "Direct manipulation isn’t just about designing interactions that feel like you’re directly manipulating the interface. It’s also about designing interfaces that lend themselves to interactions that feel direct."

  • Notification center (swipe left from the right edge): there's no inherent meaning behind why it's on the right, but the action makes it feel direct.
  • The choice of animation can make an indirect manipulation feel more direct (e.g. the five-finger gesture clearing the screen on a Mac touchpad). Similarly with Launchpad.

Invisible Interfaces

Example: Stylus vs mouse - stylus makes the gulf much narrower to the point of the interface becoming invisible.

Good vs Bad Design of "invisible-ness"

  • Good: Interfaces that are metaphorically invisible
  • Bad: Interfaces that are literally invisible.
    • Gesture-based interfaces are literally invisible, so we need to provide really good feedback to convey whether a gesture succeeded.

Invisibility by Learning

Significance: Interfaces become invisible not just through great design, but also through users learning to use them.

  • Just because the interface has become invisible doesn’t mean it’s a great interface. We cannot expect users to spend a lot of time trying to understand the interface.

Invisibility by Design

Goal: Users should feel immediately as if they’re interacting with the task underlying the interface.

  • North star, not often met.

5 Tips for Invisible Interfaces

  1. Use affordances - Affordances are places where the visual design of the interface suggests how it is to be used.
    • Buttons are for pressing, dials are for turning, switches are for flicking. Use these expectations to make your interface more usable.
  2. Know your user - Invisibility means different things to different people. To a novice, invisibility means all the interactions feel natural; to an expert, it means maximized efficiency.
  3. Differentiate your user - If serving multiple user types, provide multiple ways of accomplishing tasks.
    • "copy"/"paste" under Edit menu for novices, but also Ctrl+C / V for experts.
  4. Let your interface teach - Teach via design rather than manuals.
  5. Talk to your user - Ask them what they’re thinking while they use an interface, and check whether they're talking about the task or the interface. If they're talking about the interface, the design is visible.

Design Challenge: The Universal Remote

Challenge: How would we design an invisible interface for universal remote control, one that doesn’t have the learning curves that most have?

Takeaway:

  • Voice interfaces - Challenge is how to exploit the underlying knowledge base of the user (ie. content, media type)

2.4 Human Abilities

Information processing model:

  1. Input (Perception) - How stimuli are sensed from the world and perceived in the mind.
  2. Processing (Cognition) - How the brain stores and reasons over the input it’s received.
  3. Output (Response) - How the brain then controls the individual’s actions in the world.

Perception

Visual

  1. The center of the eye is most useful for focusing closely on color or tracking movement.
  2. Peripheral vision is good for motion detection, but not for color or detail.
  3. Women are less likely (1 in 200) to be color blind than men (1 in 12). Thus, avoid relying on color alone for understanding an interface.
  4. Sight is directional, so visual feedback is easy to miss.
  5. Visual acuity decreases with age; be flexible to visual needs depending on the age group.

Auditory

  1. Humans can discern noises based on pitch / loudness
  2. Good at localizing sound (near / far away)
  3. Can't filter out auditory information as easily as visual information. This can lead to a feeling of being overwhelmed.

Haptic

  1. Feel different types of input: pressure, vibration, temperature.
  2. Can't easily filter touch feedback.
  3. Unlike sound, touch feedback is only perceivable by the person being touched, so it can provide more personal feedback.
  4. Haptic feedback is traditionally natural on physical controls (e.g. keyboards) but more difficult to provide on touchscreens.

Design Challenge - Message Alerts

Q: How to alert someone when they receive a text message, without disturbing others.

Solutions: Smartphones have cameras and light sensors; use them to determine where the phone is, and from that, what type of alert to use. (This could lead to a lot of surprises, though.)

Memory

3 kinds of memory:

  1. Perceptual Store
  2. Short Term Memory
  3. Long Term Memory

Perceptual Store

Definition: very short term, less than a second.

Baddeley & Hitch's model of working memory:

  1. Visuospatial sketchpad - holds visual information for active manipulation.
  2. Phonological loop (aka articulatory loop, phonological store) - Verbal / auditory information. Stores sounds and speech you've heard recently.
  3. Episodic buffer - Integrates info from other systems, as well as chronological ordering of information.
  4. Central executive - Responsible for coordinating these various systems.

Short Term Memory

Definition: Capacity for holding a small amount of information in an active, readily available state for a short interval.

"Chunking" - grouping bits of information into units of short-term memory. We can only hold 4-5 chunks at a time.

Takeaways:

  • Words are easier to remember than random letters because each word forms a single chunk. Likewise, it's easier to remember phone numbers by "chunking" the list of digits.
  • Identification is easier than recall - thus minimize memory load on the user by relying more on their ability to recognize things than to recall them.
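A minimal illustration of chunking (a hypothetical helper, not from the notes): grouping a ten-digit phone number into fixed-size chunks keeps it within short-term memory's 4-5 chunk capacity.

```python
def chunk_digits(number: str, size: int = 3) -> list[str]:
    """Group the digits of a number into chunks of `size`, as people
    do when memorizing phone numbers (e.g. 404-555-1234)."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# Ten separate digits, but only four chunks to hold in memory:
print(chunk_digits("404-555-1234"))  # ['404', '555', '123', '4']
```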

Long Term Memory

Definition: Seemingly unlimited store of memories. But harder to put something in there. Generally need to put it into short-term memory several times.

Leitner system: A way of memorizing key-value pairs (ie. flashcards).

  • Things I don’t remember this time get moved back to the left, any that I do remember stay on the right. Repeat.
  • Things that I remember least are loaded into short-term memory most often, solidifying them in my long-term memory.
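The Leitner rule above can be sketched in a few lines. The box count and the review-every-2^i-sessions schedule are illustrative assumptions, not from the notes:

```python
def next_box(box: int, remembered: bool, n_boxes: int = 5) -> int:
    """Leitner update: a remembered card advances one box (reviewed
    less often); a forgotten card goes back to box 0 (reviewed most)."""
    return min(box + 1, n_boxes - 1) if remembered else 0

def due_boxes(session: int, n_boxes: int = 5) -> list[int]:
    """Review box i every 2**i sessions, so the cards we remember
    least are loaded into short-term memory most often."""
    return [i for i in range(n_boxes) if session % (2 ** i) == 0]

# A card in box 2 that we recall moves right; a miss resets it:
print(next_box(2, remembered=True))   # 3
print(next_box(2, remembered=False))  # 0
```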

Cognition

Learning

"When we design interfaces, we are in some ways hoping the user has to learn as little as possible to find the interface useful."

2 Kinds of Learning:

  1. Procedural Learning - Learning by doing (e.g. playing an instrument). Mainly what is covered in HCI.
    • Unconscious competence - "(When you have strong procedural knowledge), it can be difficult to explain to someone who lacks that competence because you aren’t sure what makes you good at it."
    • This leads to experts designing interfaces that are hard for others to use.
  2. Declarative Learning - Learning about something (e.g. association of concepts).

Cognitive Load

Definition: The amount of working memory resources used.

2 major implications on designing interfaces:

  1. Reduce the cognitive load posed by the interface so the user can dedicate more of their resources to the task itself.
  2. Understand, for our context, what other tasks are competing for cognitive resources.
    • e.g. a driving GPS: be aware that the user has few cognitive resources to devote to interacting with the interface.

Example: Programming

  • High cognitive load. Lots of short term memory required to remember syntax, variables, et cetera.
  • IDEs can mitigate these issues through error / type checking.
  • Distributed cognition - "Distributing the cognitive load more evenly between components of the system: myself and the computer"

5 Tips to Reduce Cognitive Load

  1. Multiple modalities - Describe things verbally, but also present visually to prevent over-indexing on one.
  2. Let modalities complement each other - Don't present entirely different content in multiple modalities; make the modalities complement one another.
  3. Give the user control of the pace - Time-based events stress users out. Let the user control the pace.
  4. Emphasize essential content and minimize clutter - Emphasize the most common actions while still giving access to the full range of possible options. Don't just show all of them in a flat context.
  5. Offload tasks - e.g. if a user needs to remember something they entered on a preview screen, show them what they entered. If there’s a task they need to do manually that can be triggered automatically, trigger it automatically.

Motor System

Significance: In designing interfaces, we’re also interested in what is physically possible for users to do. Includes how fast / precise they can take an action (e.g. tapping).

Example: Spotify control widget

  • On the left is the version that sits in the tray at the top of the screen; on the right is the version on the lock screen.
  • The 'X' button closes the widget, which is consistent with other applications.
  • The '+' sign is not consistent; it's unclear what it does.
  • Tapping precision on the right is much lower because the buttons are closer together. This leads to motor-system errors the designer didn't account for in the design.
  • We need to make our interfaces tolerant of such errors.
  • In this case, make the user double-tap to actually close the app, or add a confirmation action.
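One way to sketch that error-tolerant close behavior (the 0.5 s window and the class API are made up for illustration): treat a lone tap as tentative, and only close on a confirming second tap.

```python
import time

class ConfirmClose:
    """Treat a lone tap on a small 'close' button as tentative: only
    a second tap within `window` seconds actually closes the app, so
    a stray tap caused by imprecise motor input isn't destructive."""

    def __init__(self, window: float = 0.5, clock=time.monotonic):
        self.window = window
        self.clock = clock        # injectable clock, for testing
        self._last_tap = None

    def tap(self) -> bool:
        """Return True only when this tap should really close the app."""
        now = self.clock()
        if self._last_tap is not None and now - self._last_tap <= self.window:
            self._last_tap = None
            return True           # confirming second tap: close
        self._last_tap = now
        return False              # first or stale tap: show a hint instead
```

A fake clock makes the behavior easy to verify: the first tap returns False, a second tap within the window returns True.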

3.4 Design Alternatives


2.5 Design Principles and Heuristics


2.6 Mental Models and Representations


3.5 Prototyping


2.7 Task Analysis


2.8 Distributed Cognition


3.6 Evaluation


2.9 Interfaces and Politics


2.10 Conclusion to Principles


3.7 HCI and Agile Development


3.8 Conclusion to Methods


4.1 Applications: Technology


4.2 Applications: Ideas


4.3 Applications: Domains


5.1 Course Recap


5.3 Next Steps